Results 1 - 16 of 16
1.
NPJ Digit Med ; 6(1): 74, 2023 Apr 26.
Article in English | MEDLINE | ID: mdl-37100953

ABSTRACT

Advancements in deep learning and computer vision provide promising solutions for medical image analysis, potentially improving healthcare and patient outcomes. However, the prevailing paradigm of training deep learning models requires large quantities of labeled training data, which is both time-consuming and cost-prohibitive to curate for medical images. Self-supervised learning has the potential to make significant contributions to the development of robust medical imaging models through its ability to learn useful insights from copious medical datasets without labels. In this review, we provide consistent descriptions of different self-supervised learning strategies and compose a systematic review of papers published between 2012 and 2022 on PubMed, Scopus, and ArXiv that applied self-supervised learning to medical imaging classification. We screened a total of 412 relevant studies and included 79 papers for data extraction and analysis. With this comprehensive effort, we synthesize the collective knowledge of prior work and provide implementation guidelines for future researchers interested in applying self-supervised learning to their development of medical imaging classification models.
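One family of self-supervised strategies covered by reviews like this is contrastive learning. As an illustrative sketch (not code from any reviewed paper), the NT-Xent loss used by SimCLR-style methods can be written in a few lines of NumPy, where two augmented views of each unlabeled image form a positive pair:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss.
    z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize for cosine similarity
    sim = z @ z.T / temperature                        # (2N, 2N) similarity matrix
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # each sample's positive is the other view of the same image
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

The loss is low when paired views embed close together and all other images embed far apart; a pretext objective like this lets an encoder learn from unlabeled medical images before fine-tuning on a small labeled set.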

2.
Nat Commun ; 14(1): 692, 2023 02 08.
Article in English | MEDLINE | ID: mdl-36754966

ABSTRACT

Huntington's disease (HD) is caused by an expanded CAG repeat in the huntingtin gene, yielding a Huntingtin protein with an expanded polyglutamine tract. While experiments with patient-derived induced pluripotent stem cells (iPSCs) can help in understanding the disease, defining pathological biomarkers remains challenging. Here, we used cryogenic electron tomography to visualize neurites in HD patient iPSC-derived neurons with varying CAG repeats, and primary cortical neurons from BACHD, deltaN17-BACHD, and wild-type mice. In HD models, we discovered sheet aggregates in double membrane-bound organelles, and mitochondria with distorted cristae and enlarged granules, likely mitochondrial RNA granules. We used artificial intelligence to quantify mitochondrial granules, and proteomics experiments revealed differential protein content in isolated HD mitochondria. Knockdown of Protein Inhibitor of Activated STAT1 ameliorated aberrant phenotypes in iPSC-derived and BACHD neurons. We show that integrated ultrastructural and proteomic approaches may uncover early HD phenotypes to accelerate diagnostics and the development of targeted therapeutics for HD.


Subject(s)
Huntington Disease , Induced Pluripotent Stem Cells , Animals , Mice , Artificial Intelligence , Disease Models, Animal , Huntingtin Protein/genetics , Huntingtin Protein/metabolism , Huntington Disease/metabolism , Induced Pluripotent Stem Cells/metabolism , Mitochondria/metabolism , Neurons/metabolism , Phenotype , Proteomics , Humans
4.
Surg Endosc ; 37(4): 3010-3017, 2023 04.
Article in English | MEDLINE | ID: mdl-36536082

ABSTRACT

BACKGROUND: Intraoperative skills assessment is time-consuming and subjective; an efficient and objective computer vision-based approach for feedback is desired. In this work, we aim to design and validate an interpretable automated method to evaluate technical proficiency using colorectal robotic surgery videos with artificial intelligence. METHODS: 92 curated clips of peritoneal closure were characterized by both board-certified surgeons and a computer vision AI algorithm to compare the measures of surgical skill. For human ratings, six surgeons graded clips according to the GEARS assessment tool; for AI assessment, deep learning computer vision algorithms for surgical tool detection and tracking were developed and implemented. RESULTS: For the GEARS category of efficiency, we observe a negative correlation between human expert ratings of technical efficiency and AI-determined total tool movement (r = -0.72): more efficient surgeons move their tools less. Additionally, we show that more proficient surgeons perform closure with significantly less tool movement compared to less proficient surgeons (p < 0.001). For the GEARS category of bimanual dexterity, a positive correlation between expert ratings of bimanual dexterity and the AI model's calculated measure of bimanual movement based on simultaneous tool movement (r = 0.48) was also observed. On average, we also find that higher-skill clips have significantly more simultaneous movement in both hands compared to lower-skill clips (p < 0.001). CONCLUSIONS: In this study, measurements of technical proficiency extracted from AI algorithms are shown to correlate with those given by expert surgeons. Although we target measurements of efficiency and bimanual dexterity, this work suggests that artificial intelligence through computer vision holds promise for efficiently standardizing the grading of surgical technique, which may help in surgical skills training.
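The correlation reported between expert efficiency ratings and AI-measured tool movement is a plain Pearson r. A minimal sketch with hypothetical numbers (the variable names and data below are illustrative, not the study's):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

# Hypothetical per-clip data: expert GEARS efficiency scores and the
# AI-measured total tool path length (arbitrary units) for the same clips.
gears_efficiency = [5, 4, 4, 3, 2, 2, 1]
total_tool_movement = [120, 150, 160, 210, 260, 280, 330]

r = pearson_r(gears_efficiency, total_tool_movement)
print(round(r, 2))  # strongly negative: higher-rated surgeons move tools less
```

An r near -1, as in this toy data, mirrors the study's finding that efficiency ratings and total tool movement vary inversely.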


Subject(s)
Robotic Surgical Procedures , Surgeons , Humans , Robotic Surgical Procedures/methods , Artificial Intelligence , Surgeons/education , Algorithms , Computers , Clinical Competence
6.
NPJ Digit Med ; 5(1): 71, 2022 Jun 08.
Article in English | MEDLINE | ID: mdl-35676445

ABSTRACT

Prostate cancer is the most frequent cancer in men and a leading cause of cancer death. Determining a patient's optimal therapy is a challenge, where oncologists must select a therapy with the highest likelihood of success and the lowest likelihood of toxicity. International standards for prognostication rely on non-specific and semi-quantitative tools, commonly leading to over- and under-treatment. Tissue-based molecular biomarkers have attempted to address this, but most have limited validation in prospective randomized trials and expensive processing costs, posing substantial barriers to widespread adoption. There remains a significant need for accurate and scalable tools to support therapy personalization. Here we demonstrate prostate cancer therapy personalization by predicting long-term, clinically relevant outcomes using a multimodal deep learning architecture, training models on clinical data and digital histopathology from prostate biopsies. We train and validate models using five phase III randomized trials conducted across hundreds of clinical centers. Histopathological data were available for 5654 of 7764 randomized patients (71%), with a median follow-up of 11.4 years. Compared to the most common risk-stratification tool, the risk groups developed by the National Comprehensive Cancer Network (NCCN), our models have superior discriminatory performance across all endpoints, ranging from 9.2% to 14.6% relative improvement in a held-out validation set. This artificial intelligence-based tool improves prognostication over standard tools and allows oncologists to computationally predict the likeliest outcomes of specific patients to determine optimal treatment. Outfitted with digital scanners and internet access, any clinic could offer such capabilities, enabling global access to therapy personalization.

7.
Sci Rep ; 11(1): 14306, 2021 07 12.
Article in English | MEDLINE | ID: mdl-34253767

ABSTRACT

Surgeons must visually distinguish soft-tissues, such as nerves, from surrounding anatomy to prevent complications and optimize patient outcomes. An accurate nerve segmentation and analysis tool could provide useful insight for surgical decision-making. Here, we present an end-to-end, automatic deep learning computer vision algorithm to segment and measure nerves. Unlike traditional medical imaging, our unconstrained setup with accessible handheld digital cameras, along with the unstructured open surgery scene, makes this task uniquely challenging. We investigate one common procedure, thyroidectomy, during which surgeons must avoid damaging the recurrent laryngeal nerve (RLN), which is responsible for human speech. We evaluate our segmentation algorithm on a diverse dataset across varied and challenging settings of operating room image capture, and show strong segmentation performance in the optimal image capture condition. This work lays the foundation for future research in real-time tissue discrimination and integration of accessible, intelligent tools into open surgery to provide actionable insights.


Subject(s)
Deep Learning , Recurrent Laryngeal Nerve/surgery , Thyroid Diseases/surgery , Thyroidectomy/methods , Humans , Recurrent Laryngeal Nerve/pathology , Thyroid Diseases/pathology , Thyroid Gland/pathology , Thyroid Gland/surgery
10.
J Med Internet Res ; 23(3): e19461, 2021 03 15.
Article in English | MEDLINE | ID: mdl-33720026

ABSTRACT

BACKGROUND: Parents' use of mobile technologies may interfere with important parent-child interactions that are critical to healthy child development. This phenomenon is known as technoference. However, little is known about the population-wide awareness of this problem and the acceptability of artificial intelligence (AI)-based tools that help with mitigating technoference. OBJECTIVE: This study aims to assess parents' awareness of technoference and its harms, the acceptability of AI tools for mitigating technoference, and how each of these constructs varies across sociodemographic factors. METHODS: We administered a web-based survey to a nationally representative sample of parents of children aged ≤5 years. Parents' perceptions that their own technology use had risen to potentially problematic levels in general, their perceptions of their own parenting technoference, and the degree to which they found AI tools for mitigating technoference acceptable were assessed by using adaptations of previously validated scales. Multiple regression and mediation analyses were used to assess the relationships between these scales and each of the 6 sociodemographic factors (parent age, sex, language, ethnicity, educational attainment, and family income). RESULTS: Of the 305 respondents, 280 provided data that met the established standards for analysis. Parents reported that a mean of 3.03 devices (SD 2.07) interfered daily in their interactions with their child. Almost two-thirds of the parents agreed with the statements "I am worried about the impact of my mobile electronic device use on my child" and "Using a computer-assisted coach while caring for my child would help me notice more quickly when my device use is interfering with my caregiving" (187/281, 66.5% and 184/282, 65.1%, respectively). Younger age, Hispanic ethnicity, and Spanish language spoken at home were associated with increased technoference awareness. Compared to parents' perceived technoference and sociodemographic factors, parents' perception of their own problematic technology use was the factor most associated with the acceptance of AI tools. CONCLUSIONS: Parents reported high levels of mobile device use and technoference around their youngest children. Most parents across a wide sociodemographic spectrum, especially younger parents, found the use of AI tools to help mitigate technoference during daily parent-child interaction acceptable and useful.


Subject(s)
Artificial Intelligence , Parents , Child, Preschool , Cross-Sectional Studies , Humans , Parent-Child Relations , Parenting , Technology
11.
NPJ Digit Med ; 4(1): 5, 2021 Jan 08.
Article in English | MEDLINE | ID: mdl-33420381

ABSTRACT

A decade of unprecedented progress in artificial intelligence (AI) has demonstrated the potential for many fields, including medicine, to benefit from the insights that AI techniques can extract from data. Here we survey recent progress in the development of modern computer vision techniques, powered by deep learning, for medical applications, focusing on medical imaging, medical video, and clinical deployment. We start by briefly summarizing a decade of progress in convolutional neural networks, including the vision tasks they enable, in the context of healthcare. Next, we discuss several example medical imaging applications that stand to benefit, including cardiology, pathology, dermatology, and ophthalmology, and propose new avenues for continued work. We then expand into general medical video, highlighting ways in which clinical workflows can integrate computer vision to enhance care. Finally, we discuss the challenges that must be addressed for real-world clinical deployment of these technologies.

12.
J Am Med Inform Assoc ; 27(8): 1316-1320, 2020 08 01.
Article in English | MEDLINE | ID: mdl-32712656

ABSTRACT

OBJECTIVE: Hand hygiene is essential for preventing hospital-acquired infections but is difficult to accurately track. The gold standard (human auditors) is insufficient for assessing true overall compliance. Computer vision technology has the ability to perform more accurate appraisals. Our primary objective was to evaluate whether a computer vision algorithm could accurately observe hand hygiene dispenser use in images captured by depth sensors. MATERIALS AND METHODS: Sixteen depth sensors were installed on one hospital unit. Images were collected continuously from March to August 2017. Utilizing a convolutional neural network, a machine learning algorithm was trained to detect hand hygiene dispenser use in the images. The algorithm's accuracy was then compared with simultaneous in-person observations of hand hygiene dispenser usage. The concordance rate between human observation and the algorithm's assessment was calculated. Ground truth was established by blinded annotation of the entire image set. Sensitivity and specificity were calculated for both human and machine-level observation. RESULTS: A concordance rate of 96.8% was observed between human and algorithm (kappa = 0.85). Concordance among the 3 independent auditors who established ground truth was 95.4% (Fleiss' kappa = 0.87). Sensitivity and specificity of the machine learning algorithm were 92.1% and 98.3%, respectively. Human observations showed sensitivity and specificity of 85.2% and 99.4%, respectively. CONCLUSIONS: The computer vision algorithm was equivalent to human observation in detecting hand hygiene dispenser use. Computer vision monitoring has the potential to provide a more complete appraisal of hand hygiene activity in hospitals than the current gold standard, given its ability to continuously cover a unit in space and time.
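The agreement and accuracy statistics reported above are straightforward to compute from binary dispenser-use labels. A self-contained sketch (illustrative only, not the study's code) of sensitivity, specificity, and Cohen's kappa:

```python
def confusion_counts(y_true, y_pred):
    """True/false positives and negatives for binary labels (1 = dispenser use)."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    tn = sum((not t) and (not p) for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def sensitivity_specificity(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary raters (e.g. human vs. algorithm)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    p_yes = (sum(a) / n) * (sum(b) / n)                   # chance both rate positive
    p_no = (1 - sum(a) / n) * (1 - sum(b) / n)            # chance both rate negative
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)
```

Kappa corrects the raw concordance rate for agreement expected by chance, which is why the study reports both (96.8% concordance, kappa = 0.85).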


Subject(s)
Algorithms , Hand Hygiene , Image Processing, Computer-Assisted , Video Recording , California , Cross Infection/prevention & control , Hospitals, Pediatric , Humans , Infection Control , Machine Learning , Neural Networks, Computer , Personnel, Hospital
14.
AMIA Annu Symp Proc ; 2020: 1373-1382, 2020.
Article in English | MEDLINE | ID: mdl-34025905

ABSTRACT

Open, or non-laparoscopic, surgery represents the vast majority of all operating room procedures, but few tools exist to objectively evaluate these techniques at scale. Current efforts rely on visual assessment by human experts. We leverage advances in computer vision to introduce an automated approach to video analysis of surgical execution. A state-of-the-art convolutional neural network architecture for object detection was used to detect operating hands in open surgery videos. Automated assessment was expanded by combining model predictions with a fast object tracker to enable surgeon-specific hand tracking. To train our model, we used publicly available videos of open surgery from YouTube and annotated these with spatial bounding boxes of operating hands. Our model's spatial detections of operating hands significantly outperform the detections achieved using pre-existing hand-detection datasets and allow for insights into intra-operative movement patterns and economy of motion.
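Detect-then-track pipelines like the one described above typically associate per-frame detections across time by bounding-box overlap. A minimal sketch, assuming greedy IoU matching (which may differ from the fast tracker the authors actually used):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_tracks(prev_boxes, new_boxes, threshold=0.3):
    """Greedily assign each new-frame detection to the unclaimed previous-frame
    track with the highest IoU above `threshold`; unmatched detections get None."""
    assignments = {}
    for j, nb in enumerate(new_boxes):
        best_i, best = None, threshold
        for i, pb in enumerate(prev_boxes):
            if i in assignments.values():
                continue  # each track claims at most one detection
            score = iou(pb, nb)
            if score > best:
                best_i, best = i, score
        assignments[j] = best_i
    return assignments
```

Linking a detection to the same hand across frames is what turns raw per-frame boxes into the per-surgeon movement trajectories used to study economy of motion.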


Subject(s)
Hand , Movement , Automation , Computers , Humans , Neural Networks, Computer , Surgeons
15.
NPJ Digit Med ; 2: 11, 2019.
Article in English | MEDLINE | ID: mdl-31304360

ABSTRACT

Early and frequent patient mobilization substantially mitigates risk for post-intensive care syndrome and long-term functional impairment. We developed and tested computer vision algorithms to detect patient mobilization activities occurring in an adult ICU. Mobility activities were defined as moving the patient into and out of bed, and moving the patient into and out of a chair. A data set of privacy-safe depth-video images was collected in the Intermountain LDS Hospital ICU, comprising 563 instances of mobility activities and 98,801 total frames of video data from seven wall-mounted depth sensors. In all, 67% of the mobility activity instances were used to train algorithms to detect mobility activity occurrence and duration, and the number of healthcare personnel involved in each activity. The remaining 33% of the mobility instances were used for algorithm evaluation. The algorithm for detecting mobility activities attained a mean specificity of 89.2% and sensitivity of 87.2% over the four activities; the algorithm for quantifying the number of personnel involved attained a mean accuracy of 68.8%.
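Reporting a mean sensitivity and specificity "over the four activities" suggests a one-vs-rest macro-average across activity classes. A sketch of that summary (illustrative only; the class names below are assumptions based on the activity definitions above):

```python
def macro_sensitivity_specificity(y_true, y_pred, classes):
    """One-vs-rest sensitivity and specificity per class, macro-averaged,
    as one might summarize a multi-activity detector."""
    sens, spec = [], []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        tn = sum(t != c and p != c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        sens.append(tp / (tp + fn))
        spec.append(tn / (tn + fp))
    return sum(sens) / len(sens), sum(spec) / len(spec)

# Hypothetical labels for the four mobility activities
activities = ["bed_in", "bed_out", "chair_in", "chair_out"]
```

Macro-averaging weights each activity equally, so a rare activity (e.g. chair transfers) counts as much as a common one in the headline numbers.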
